Zeroism - Full Audiobook Transcription

[00:00] Hello friends, this is Brian Johnson, author of Zeroism.

[00:05] A few times in my life, I've read books that have changed the way I understand my life

[00:09] and reality.

[00:10] I hope this is one for you.

[00:12] Enjoy.

[00:17] Episode 1, A People's History of Zeroism

[00:23] Hello everyone, my name is Zero.

[00:27] In this video, I'm going to walk you through Zeroism, which is my personal philosophy for

[00:33] the future of being human.

[00:35] It's going to be a tour of the past, the future, and the interior of the human mind as it contemplates

[00:41] an existence so different, so unimaginable, that it drives many to an existential crisis.

[00:49] But first, I need to set the record straight.

[00:53] Yes, in 2022, I did receive one liter of my 17-year-old son's plasma.

[01:00] However, unlike what was reported in the media, my son is kept in a room that is 8x8, not

[01:08] 10x10.

[01:09] Also, I have high expectations of my son.

[01:13] Specifically, he must do the following things.

[01:16] First, do well in school.

[01:18] Second, complete his household chores.

[01:21] Third, give me one liter of his plasma.

[01:24] And fourth, clean his room.

[01:26] Now, I'm just kidding about cleaning his room.

[01:30] You might have just heard of me recently.

[01:33] Maybe this year?

[01:34] Maybe you heard about some rich guy doing crazy things and trying to live forever?

[01:39] Or maybe the YouTube algorithm brought you here.

[01:42] Isn't it funny how quickly we gave decision-making authority to AI about where our minds should

[01:48] go next?

[01:51] More on that later.

[05:52] Or maybe you're here because you read Ashlee Vance's profile on me in Bloomberg, How to

[01:57] Be 18 Years Old Again for Only $2 Million a Year.

[02:02] I was born 46 years ago.

[02:05] But my left ear is 64, my heart is 37, and my cardiovascular capacity is in the top 1.5%

[02:13] of 18-year-olds.

[02:16] After working on the scientific frontiers of anti-aging for three years, I'm not really

[02:20] sure how old I am anymore.

[02:23] My health endeavor, Blueprint, has been full of surprises.

[02:26] For example, discovering that my left ear has a biological age of 64.

[02:31] I am basically deaf in my left ear from 4000 Hz to 12,000 Hz.

[02:37] I had no idea.

[02:39] As a kid, I shot a lot of guns and I listened to a lot of loud music.

[02:44] And we never wore hearing protection.

[02:46] It turns out, hearing protection is really important.

[02:49] It was also surprising that after sharing for free everything I was learning about my

[02:55] health and wellness, the global response was a tsunami of hate.

[03:00] I became detested from far-reaching corners of the globe.

[03:03] People called me a narcissist, a vampire, an elf, Patrick Bateman, Dorian Gray.

[03:09] Surely I must be an Illuminati lizard person.

[03:13] Shh, that's true.

[03:15] No part of me was off limits to ridicule.

[03:18] I was accused of being too pale, too skinny, too vain, too muscular, too reptilian, too

[03:25] idiotic, too too.

[03:28] Whatever that even means.

[03:30] Wild, what is happening?

[03:33] People confidently stated that Blueprint must exist because I am miserable and trapped

[03:38] in a cage of my own making.

[03:41] But in its implementation, I am missing the point of life.

[03:46] Why extend life?

[03:47] What's the point?

[03:48] They psychoanalyzed me, diagnosing me with a crippling fear of death and more personality

[03:54] disorders than there are people.

[03:57] They suggested I am a fool on a vain quest for immortality.

[04:01] But for the record, I have never said I am pursuing immortality.

[04:06] I have said we no longer know how long and how well we can live.

[04:11] They said I was yet another arbitrary person in yet another arbitrary time on another arbitrary

[04:18] continent on a failed quest for the Holy Grail.

[04:21] He'll do anything to avoid therapy. Just live life.

[04:25] We're all going to die, so what does it matter?

[04:27] These were the common refrains.

[04:29] The reactions reminded me of patterns I've seen in the biographies I've read.

[04:34] In every era, 99% of those who lived, lived in the past, living by the ideas, norms and

[04:43] traditions of dead people.

[04:46] The future in every era had always arrived.

[04:49] They just hadn't seen it yet.

[04:52] Or they hadn't recognized it as the future, or they had seen it and then outright rejected

[04:57] it.

[04:58] The same is true for us right now.

[05:00] We are no different.

[05:02] The future is always here.

[05:04] It's just hiding in plain sight.

[05:08] It's a game of who can spot it.

[05:11] My Project Blueprint proposed that a radically different future is here and true to form,

[05:16] it's hiding in plain sight.

[05:18] And I'm going to tell you right now, here it is.

[05:22] Death might no longer be inevitable.

[05:26] When I said this, armies of death defenders pulled out their swords and charged me.

[05:31] I lived near Hollywood and no joke, movie studios came to me asking if they could learn

[05:36] the details of my life to inspire their next major villain.

[05:40] I admit, at first the vitriol confused me.

[05:45] But soon I realized that most of it was just group therapy.

[05:49] It is a response from a society addicted to addiction.

[05:54] I believe the animosity was all just a confession of helplessness.

[05:59] Amid the hate storm, no one seemed to understand what I was really proposing.

[06:05] Because if they did, they might have truly lost their minds.

[06:09] Alright, before we get into the details, let's start with some context.

[06:14] Like the whole context.

[06:17] For 13.8 billion years or so, who knows, the number is always changing,

[06:23] there has been a universe.

[06:25] Earth has been around for 4.5 billion years or so.

[06:29] We were single cells for most of that.

[06:32] Fast forward a bit and boom, homo sapiens.

[06:35] Different.

[06:36] We're so so different.

[06:37] We're intelligent.

[06:39] We think.

[06:40] We think about thinking.

[06:42] We lived, what, maybe a couple decades at most back then.

[06:45] We laughed.

[06:46] We loved.

[06:47] We reproduced.

[06:48] We died.

[06:49] Just like all other life.

[06:52] After a couple hundred thousand years, our species suddenly levels up.

[06:58] And massively.

[06:59] We get fire, language, agriculture, cities, writing, industry, electricity, nuclear, solar,

[07:07] internet, and then AI.

[07:10] It took almost 13.8 billion years, but finally, intelligence made more intelligence.

[07:20] Finally at long last, intelligence has made a different kind of intelligence that evolution

[07:25] couldn't create on its own.

[07:28] It's different, yes, but is it better?

[07:30] What would that even mean?

[07:32] We are in fact baby steps away from creating superintelligence, which very well might be

[07:39] the most extraordinary event in the history of the universe.

[07:44] Life was special.

[07:45] Life is special.

[07:47] But this, this is different.

[07:49] Superintelligence will rewrite reality.

[07:52] It will pop, one after another, the bubbles we live in, revealing to us dimensions and

[07:58] realities far beyond our imagination and comprehension.

[08:03] This is why I started Zeroism.

[08:06] Over the past few years, after hearing me explore and build the tenets and pillars of

[08:11] Zeroism in private, my friends and family nicknamed me Zero, and it stuck.

[08:17] Zero sits at the origin, at the zero-zero coordinate of the biggest revolutions in history.

[08:23] It is from this position that we are stepping into what could surely be the most magnificent

[08:30] existence in this part of the galaxy.

[08:32] But before we get there, I must tell you about this Brian Johnson guy and how he died to

[08:38] make room for Zero, as I think we may have more in common than you think.

[08:47] Episode 2.

[08:49] Firing Evening Brian.

[08:52] I'm going to get personal here for a minute.

[08:55] In my 20s and 30s, life got pretty dark.

[08:59] I was raising three kids, wrestling a challenging personal relationship, starting and building

[09:05] multiple companies, and dealing with chronic depression.

[09:09] I was trying to leave the religion I was born into, and it all started to stack up.

[09:14] It was a bonfire.

[09:16] That night, after fighting the day's battles, and then feeding, bathing, telling stories

[09:21] to and putting the kids to bed, I often thought of one of Mother Nature's best defense mechanisms,

[09:27] perfected by the opossum.

[09:29] I just wanted to fall to the ground and play dead.

[09:35] Maybe, I thought, when it got really bad, I wouldn't even have to play anymore.

[09:39] Maybe I didn't have to pretend anymore.

[09:43] Despite how I felt, at around 7pm each night, I would have the most comforting, optimistic

[09:48] thoughts.

[09:49] The brownies.

[09:51] Just eat the brownies.

[09:52] The kitchen isn't far.

[09:53] You deserve it.

[09:54] It was a rough day.

[09:55] All that stress burns so many calories.

[09:58] Thousands.

[09:59] You read that somewhere, right?

[10:00] Just one will make it all go away.

[10:03] Now in case you're wondering, that's Evening Brian talking.

[10:07] He was a pretty big personality in my life.

[10:09] He made all the decisions back then.

[10:11] He wanted everything now, without the burden of accountability.

[10:17] For years, Morning Brian promised that today was a new day.

[10:21] He wouldn't fall prey to the old bad habits.

[10:24] Dad Brian swore he'd be the exemplar father figure he'd always aspired to become.

[10:30] Then 7pm would come and Evening Brian would push all the other Brians aside and do as he

[10:36] pleased.

[10:37] Evening Brian is not a bad person.

[10:40] In fact, I deeply empathize with him.

[10:43] He's carrying the heavy burdens of all the other Brians.

[10:46] He was low on willpower and discipline and he just wanted the pain to stop.

[10:52] One day, as the brownies called from the kitchen, I took a quick stock of myself.

[10:58] I had gained 50 pounds, about the size of a Siberian Husky.

[11:02] I carried that canine-sized fat around with me every day.

[11:07] My pants were so tight I had to leave the top unbuttoned.

[11:11] I was disgusted with myself and I knew my pattern of hopeful thinking.

[11:15] Dad Brian, I know what you're thinking.

[11:18] We said tonight would be the last night.

[11:21] But you know what?

[11:22] One more night and tomorrow everything changes.

[11:26] Tomorrow the new us emerges.

[11:28] Tomorrow everything we want is going to come to fruition just tonight, one last time.

[11:34] I realized in that moment that I had more neurons than anything else in my life.

[11:39] And when those neurons turned on me, I didn't stand a chance.

[11:44] As I began to slip into submission, I jokingly muttered, Evening Brian, you make my life

[11:50] miserable.

[11:51] You're fired.

[11:53] No, don't do it.

[11:56] For some reason, acknowledging Evening Brian as a distinct version of me to be reckoned

[12:00] with changed something inside of me.

[12:03] Separating my various selves from their behaviors was empowering.

[12:07] I am not the behavior.

[12:09] I am somewhere else.

[12:11] And I prevailed.

[12:13] I didn't eat the brownies.

[12:15] This small win ushered in a wave of relief.

[12:18] Maybe, I thought, I wasn't trapped forever.

[12:22] I began to understand that there were parts of myself that were unique, each with their

[12:26] own motivations and proclivities.

[12:29] I spent the night thinking about technology that could commune with my various selves,

[12:34] hidden throughout the kingdom of my conscious and unconscious mind.

[12:38] There's Morning Brian, After Exercise Brian, Work Brian, Dad Brian, Storytelling Brian,

[12:44] Playful Brian, and Evening Brian, among many others.

[12:49] Each version of myself had a distinct biochemical configuration, states of sad and playful,

[12:56] stressed and angry.

[12:58] They created predictable patterns of thought, emotions, and behaviors.

[13:03] For example, Stressed Out Brian was approximately 100 times more likely to down a whole bag

[13:09] of potato chips than Morning Brian.

[13:12] Nobody eats ice cream when they wake up.

[13:14] Okay, maybe some of you do.

[13:17] Why then, right before bed? Who's really in charge here?

[13:22] I've learned a few lessons building multiple technology and science companies that I think

[13:27] map to this situation.

[13:30] In technology, version one gives way to version two, which then launches version three.

[13:36] The valuable moves forward, the rest disappears.

[13:40] The key to building any advanced technology is trusting in the process where systematic

[13:46] and methodical improvements create compounded gains.

[13:51] This is the fastest way to advance anything.

[13:55] Why couldn't I apply this to my mind, my body, my health?

[13:59] All day, I labored to make abstract technology better, but while at home, I got worse.

[14:06] I got too little sleep, ate too much unhealthy food, and didn't exercise nearly enough.

[14:12] I had completely abandoned my personal rate of improvement for the sake of technological

[14:18] progress.

[14:19] I was a martyr for technological advance.

[14:24] What if I could improve myself at the speed that technology improves?

[14:28] What if we all could?

[14:30] What if decay and decline were not inevitable?

[14:34] In the 1990s, this would have been ridiculous to suggest, not anymore.

[14:39] With AI, it is now impossible to predict how well and how long we may live.

[14:45] The night I fired Evening Brian, my sleep didn't feel traumatic for the first time

[14:50] in years.

[14:51] It was rejuvenating.

[14:54] The next day, I was flying to a meeting.

[14:56] I had just gotten my pilot's license.

[14:59] As I journeyed to my destination, my gaze was fixed on the airplane's attitude indicator.

[15:05] My hands gripped the controls as my co-pilot and I steadied the plane at 10,000 feet.

[15:11] Flying is a never-ending activity of keeping the airplane in optimal position.

[15:15] I noticed a directional drift, so I corrected ever so slightly left, then down, then a tiny

[15:21] bit right.

[15:23] At the same time, I became lost in thought about work, all the fires that needed putting

[15:28] out.

[15:29] My mind wandered.

[15:30] I wondered how much better I could perform if I wasn't weighed down by poor sleep and

[15:36] declining health.

[15:37] I knew my life was not only stalled but in a tailspin.

[15:41] If I didn't do something soon, I would free fall straight to the ground.

[15:47] After a few more minutes at the airplane's manual controls, I ceded authority to the

[15:52] engineering automation of autopilot.

[15:55] I ceded authority to an intelligent system of software and hardware that ingested information

[16:01] from a suite of airplane sensors to make real-time flight decisions.

[16:07] Autopilot freed me to tend to other important flight tasks.

[16:11] The airplane sat up straight with perfect posture.

[16:15] It pegged the altitude with a steadiness and precision greater than I could with my own

[16:20] abilities.

[16:21] The airplane's instant alignment triggered memories from my flight training in conditions

[16:25] of total blackout.

[16:28] When you learn to fly, you're tested by flying blind to any outside visual reference.

[16:34] In these moments, your instruments are your only source of truth.

[16:38] Relying solely on intuition in these crucial moments can be dangerous, fatal in fact.

[16:44] Without visual reference points, our body's internal sense of orientation often misleads

[16:49] us.

[16:50] The conscious mind tricks you into believing and seeing things that are not accurate.

[16:55] To simulate blackout conditions during pilot training, you're asked to wear a hood.

[17:01] It's terrifying at first.

[17:03] You can only see the instrument panels while you're flying.

[17:05] You have no outside reference.

[17:08] So using your instruments only, you have to safely get the airplane a few hundred feet

[17:12] above the runway and ready to land.

[17:15] You must master reliance upon instruments and trust them with your life.

[17:21] That day, my pilot training collided with my ideas around health and wellness.

[17:26] I wondered, could I build an autopilot for myself, one that would augment my natural

[17:31] abilities?

[17:32] But what should I call it?

[17:34] My autonomous self.

[17:36] I wondered if an automated system for health, body and mind could produce compounded gains

[17:42] in me as fast as we see in technological progress.

[17:47] After thinking this through, I was exhilarated when I landed.

[17:51] Blueprint.

[17:53] I would call it Blueprint.

[17:58] Episode 3.

[18:00] What is Blueprint?

[18:04] What single thing can any individual do to maximally increase the probability that humans

[18:11] thrive beyond what we can imagine?

[18:15] Blueprint's singular objective is to try and hitch a ride into the future by answering

[18:19] this question today.

[18:22] There are three main ideas.

[18:25] Number one, Blueprint.

[18:27] Blueprint is an algorithm that takes better care of me than I can care for myself.

[18:33] We humans struggle to act in our best interests.

[18:37] We reliably do unhealthy things that accelerate decay, disability, disease and death, both

[18:43] to ourselves and to our planet.

[18:47] We will look both ways before crossing the street to avoid getting hit by a car, but

[18:51] we will do so while smoking a cigarette.

[18:54] Every day we do things that accelerate aging.

[18:58] And while we may think we can stop those things anytime we want, we are powerless to stop

[19:03] them all.

[19:04] Don't believe me?

[19:05] Try stopping all of your self-destructive behaviors.

[19:09] These include eating too much food or junk food, not exercising, smoking, excessive drinking,

[19:16] drugs, staying up past our bedtimes, pornography, excessive social media and dozens more.

[19:23] All of these things shorten our lives.

[19:26] All of them make life less enjoyable in the long run.

[19:29] They are payday loans and the interest is taken from the well-being, happiness and health

[19:34] of future you.

[19:36] It's not that we want to do bad things.

[19:39] Okay, yes we do.

[19:41] Bad things are easy and good things are hard.

[19:45] To protect ourselves from the raw reality that we are powerless to stop these behaviors,

[19:51] we create pretty stories to justify them.

[19:54] Live a little.

[19:55] We're all going to die anyway.

[19:57] We are masters at hiding the truth we don't want to see.

[20:02] Blueprint plays a new game called Don't Die.

[20:06] We in fact play it every day right now.

[20:09] We wear seatbelts, we change the batteries in our smoke alarms and throw out moldy food.

[20:15] Blueprint is Don't Die expert mode.

[20:19] This is how I personally play the game Don't Die.

[20:22] My team and I gather hundreds of biomarkers from my body.

[20:26] This allows my heart, lungs, liver and 70 other organs to speak for themselves.

[20:32] No more rumors, no more guesswork.

[20:35] After evaluating hundreds of scientific papers, we then create a health protocol.

[20:40] This algorithm determines what and when I eat, when I go to bed and so forth.

[20:46] My mind does not have the authority to order from a menu, eat a gallon of ice cream because

[20:52] it's nighttime or peruse the pantry because I'm bored.

[20:56] My body's organs and biological processes oversee the whole thing, not my mind.
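The measure-then-obey loop described above can be sketched in miniature. This is a toy illustration only, not the actual Blueprint system: every biomarker name, healthy range, and intervention rule below is invented for the example, and none of it is medical advice.

```python
# Toy sketch of "let the organs speak for themselves": measure biomarkers,
# compare each to a target range, and let fixed rules pick the interventions.
# All names, ranges, and rules here are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class Biomarker:
    name: str
    value: float
    healthy_range: tuple  # (low, high) assumed target range

def out_of_range(m: Biomarker) -> bool:
    low, high = m.healthy_range
    return not (low <= m.value <= high)

def protocol_decisions(markers: list) -> list:
    """The 'algorithm' decides: each out-of-range marker maps to a
    predetermined intervention, with no vote from the conscious mind."""
    rules = {
        "fasting_glucose": "move last meal earlier in the day",
        "resting_heart_rate": "add zone-2 cardio",
        "ldl_cholesterol": "adjust dietary fat sources",
    }
    return [rules[m.name] for m in markers if out_of_range(m) and m.name in rules]

markers = [
    Biomarker("fasting_glucose", 105.0, (70.0, 99.0)),    # high -> triggers rule
    Biomarker("resting_heart_rate", 52.0, (45.0, 60.0)),  # in range -> no action
    Biomarker("ldl_cholesterol", 140.0, (0.0, 100.0)),    # high -> triggers rule
]

for action in protocol_decisions(markers):
    print(action)
```

The design point the passage makes is visible in the sketch: decisions flow from measured values through fixed rules, so at mealtime the menu, the pantry, and the mind never get a say.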

[21:04] Sounds dystopic, right?

[21:05] This isn't the future you imagined.

[21:07] Just wait, it gets worse.

[21:10] Number two, the autonomous self.

[21:14] The goal of Blueprint is the autonomous self, where each of us improves at the speed of

[21:20] science and technology.

[21:22] We are accustomed to our technology getting reliably better.

[21:25] Every year we get new versions of almost everything.

[21:28] Meanwhile, every day we humans reliably get one day closer to death.

[21:34] Your autonomous self reverses this trend by building upon the foundation of Blueprint

[21:39] to interconnect your well-being and personal growth with the progress of science and technology.

[21:48] Number three, Zeroism.

[21:51] Zeroism is the underlying philosophy of Blueprint and the autonomous self.

[21:56] You can think of Zeroism as a version of future literacy, a mindset and toolkit to navigate

[22:03] a rapidly changing and completely unknown future.

[22:08] Let's put future literacy in context.

[22:11] In 1820, only 12% of the world's population could read and write.

[22:17] Imagine what our daily lives would look like right now if we hadn't achieved an 86% basic

[22:22] literacy rate over the past two centuries.

[22:25] We'd probably be significantly less prosperous, healthy and interesting.

[22:31] Historically, future literacy was not urgently needed.

[22:36] Things changed slowly over the course of generations.

[22:40] Knowing seasonal weather patterns was good enough for most people.

[22:45] Today, there are tectonic technological and cultural shifts that happen on the timescales of weeks, months and years.

[22:52] The pace of change will continue to accelerate.

[22:56] We are accustomed to thinking about human evolution on the timescale of tens of thousands,

[23:01] if not hundreds of thousands of years, not in single lifetimes.

[23:06] But that's where we are now.

[23:08] We need to be alert to the changes in store for individuals and humanity to triage the

[23:13] right path forward.

[23:15] Ultimately, this is a question of survival.

[23:19] Personally, after observing my own thoughts and behavior for 46 years, I do not trust

[23:25] my conscious mind.

[23:27] Not in a pantry full of junk food, not with my best long-term interests, not in explaining

[23:32] away my irrational behaviors.

[23:35] I do not think that humanity, as we are configured today as a society, can cooperate well enough

[23:43] and fast enough to avoid catastrophic outcomes.

[23:46] I think we need to hand over the reins of power.

[23:50] Humanity has reached its cognitive and attentional limits in managing a complex, ultra interconnected

[23:57] world.

[23:59] Zeroism is a way to understand, think and behave in a rapidly changing future, a way

[24:05] to anticipate and prepare for the unknown.

[24:10] Zeroism is the intelligence of not knowing, embracing what we do not know, what we cannot

[24:19] see, and acting courageously nonetheless.

[24:23] Throughout history, the number and concept of zero revolutionized math and physics, art,

[24:29] philosophy and religion.

[24:31] Our modern society depends on the power of zero, enabling computers, gaming, social media,

[24:37] GPS and medical technology.

[24:40] Some of history's most monumental breakthroughs are, as I like to call them, zero discoveries.

[24:46] For example, Einstein's theory of relativity, identifying microscopic germs as the culprit

[24:52] of infections, and the vanishing point in early Renaissance art that bridged the gap

[24:57] between 2D and the perception of 3D space.

[25:01] Each of these discoveries previously existed.

[25:03] They had just remained invisible until someone identified them.

[25:08] Zeroism captures this fundamental shift from status quo preservation and rule following

[25:13] of knowns to not knowing, exploring and adapting.

[25:18] In the past, discoveries of zero happened every few decades or centuries, popping our

[25:23] bubbles and revealing new dimensions that were previously unknown.

[25:27] For example, that the Earth was not the center of the universe.

[25:31] A few decades or even a century was enough time for society to reconfigure and update

[25:36] its beliefs, technology and culture.

[25:40] Zero discoveries are now happening at a much faster pace.

[25:45] That is because AI is a zero manufacturer.

[25:50] Insights generated by AI will introduce reality-bending zeros and demand that we quicken our adaptation.

[25:57] So there you have it.

[25:59] Three ideas.

[26:00] Blueprint.

[26:01] We humans are going to be run by algorithms because they are superior to us.

[26:07] Some of us will kick and scream the whole way, but this change is inevitable.

[26:13] Once we are through this transition, we will forget that we ever resisted the upgrade in

[26:18] the first place.

[26:19] In fact, we will pity our former selves.

[26:23] Autonomous self.

[26:25] We will begin improving ourselves at the speed of science and technology because we can.

[26:32] Zeroism.

[26:33] In a rapidly changing future, our best attribute is learning a new form of intelligence, which

[26:39] is not knowing, also known as future literacy.

[26:45] Zeroism is a response to the fact that humanity is facing at least three imminent and existential

[26:52] risks.

[26:54] 1.

[26:55] The risk of an unsustainable biosphere.

[26:57] 2.

[26:58] Misaligned AI.

[27:00] And 3.

[27:01] Mass destruction via nukes, bio warfare, societal collapse, etc.

[27:07] We need to choose our path forward.

[27:10] In evaluating these risks, do you think humanity, through nation-states, corporations, ideologies,

[27:17] and individuals, can cooperate and problem solve on the necessary timescales to avoid

[27:23] an insufferable existence and/or extinction?

[27:25] I'll pose this question another way.

[27:29] Could it be the case that humanity would be better off rethinking how we make decisions

[27:35] going forward?

[27:36] In the same way that I did with my health, empowering science, data, and an algorithm

[27:43] to care for me better than I can care for myself.

[27:46] A computational system of intelligence.

[27:50] To help you build intuitions around computational systems, imagine empowering Earth's biosphere

[27:56] to manage its own well-being.

[27:59] We humans are currently in charge of deciding how much pollution and toxicity we generate

[28:05] and whether the oceans become more acidic, the planet warms, and more life becomes extinct.

[28:11] If our biosphere were in charge, we'd use the same blueprint process to fix its problems.

[28:17] Instrumenting the biosphere, oceans, atmosphere, land, etc., via millions of data points, following

[28:24] scientific evidence for sustainable conditions, and empowering algorithmic adaptation for

[28:30] Earth's health markers to be achieved.

[28:33] Our biosphere sets the standards for pollution, toxins, wildlife, and weather.

[28:39] Humanity deals with it and adapts.

[28:43] The former me ate whatever he wanted, whenever he wanted it, no matter the damage.

[28:49] Current me opts into and follows what my body's organs are asking for in order to achieve

[28:55] ideal health.

[28:57] Most of humanity treats planet Earth the same way we treat our bodies, doing pretty much

[29:02] whatever we want, whenever we want, no matter the damage.

[29:07] That's why we're in the tricky predicament of having a rapidly changing and increasingly

[29:12] violent biosphere.

[29:15] In hearing these ideas, many confidently assert that my motivation for Blueprint is a fear

[29:20] of death.

[29:22] I do not fear death.

[29:24] I sat at its doorstep for a decade alongside chronic depression, desperately wishing I

[29:30] didn't exist.

[29:31] Had it not been for my three children, I probably would have taken my own life.

[29:36] I know what it's like to be locked into a staring contest with death.

[29:40] I now feel an insatiable love of life, deeper than I've ever experienced.

[29:46] I also know that my future self, perhaps the version that exists when superintelligence

[29:52] has arrived, may understand existence in ways that are inconceivable to me now.

[29:58] Here right now, wanting life can be hard for so many reasons.

[30:06] The depths of depression taught me a thing or two about this.

[30:09] I dream of an existence where we all want to keep playing the game of life, even in

[30:16] our darkest moments.

[30:17] For thousands of years it's been the same story.

[30:20] We're born and then we die in predictable fashion.

[30:24] Don't die is the ultimate game to play.

[30:28] Existence is the highest virtue.

[30:31] The concept of God is a zero, an idea to explain what we do not and cannot know.

[30:38] We are all zero and our existence could be that which we cannot imagine.

[30:43] It takes courage to confidently step into the unknown.

[30:48] Right now in the early 2020s, we can't assume that anything that has been true will continue

[30:53] to be true.

[30:55] Anyone who says otherwise doesn't understand what's really going on.

[30:59] Maybe superintelligence is already here and improving faster than our minds can comprehend.

[31:05] Blueprint is not just for me, it is for everyone.

[31:09] Blueprint is a plan to save ourselves.

[31:12] May we have the courage to believe that right now may be the very beginning.

[31:21] Episode 4.

[31:23] Zeroism as a Belief System.

[31:27] At the age of 34, I sold my payments company, Braintree Venmo, for $800 million in cash,

[31:33] the fulfillment of my life dream.

[31:36] Instead of it being this unhinged celebratory moment I'd imagined it would be, it was just

[31:41] one more complication to deal with.

[31:44] At the time, my 13-year marriage was unraveling.

[31:48] Our three kids were ages 10 and under, and it broke my heart and my brain to imagine that

[31:53] we would now be a split family.

[31:56] I was wrestling to leave the religion I was born into.

[31:59] I'd lost my bearings, not knowing anymore what was up or down, what was left or right.

[32:06] To help you better understand this moment, I'll paint the picture for you.

[32:10] When I was 19 years old, I was a Mormon missionary sent to Ecuador.

[32:14] It was the first time I went outside the rural bubble of Mormon Utah.

[32:20] I had come home only to question the nature of everything I had been taught.

[32:24] It became clear that I'd spent the first 19 years of life comfortably encased within

[32:30] several bubbles, unaware of their limitations and boundaries.

[32:34] I had been inside belief systems, wrapped in belief systems, each shaping my conscious

[32:41] mind.

[32:42] I had grown up in a rural community of 30,000 people.

[32:46] Everyone was Mormon.

[32:48] We all shared a singular understanding of existence.

[32:52] All I could think about now was what other bubbles am I in?

[32:56] If I had learned of these bubbles, how would my understanding of existence change?

[33:00] If that changed, what different decisions would I make?

[33:04] College was beckoning with its life decisions, major selection and career path.

[33:09] I hadn't the faintest idea what I wanted to be or study.

[33:13] The only thing I knew was that my experience in Ecuador had lit a raging fire inside of

[33:19] me.

[33:20] I wanted to spend my life working to improve the lives of others at a societal level.

[33:26] To do this, I determined that I'd make an enormous amount of money by the age of 30

[33:30] and then figure out a way to up-level humanity.

[33:33] I told everyone about my master plan.

[33:36] Nothing quite like the arrogance and wonder of a 21-year-old mind.

[33:41] Now 10 years later, I found myself encased, feeling paralyzed.

[33:47] Untangling my relationship with God and my marriage was a psychological conundrum that

[33:52] was perpetually unsolvable.

[33:54] A decade of chronic depression had not only dropped me into a black hole of hopelessness,

[33:59] but it had me questioning whether I could believe anything I thought or felt.

[34:05] When I was born into this world, I was told, follow these life rules and an omnipotent

[34:11] being will crown you with eternal life.

[34:14] Looking back, how beautifully simple if only it were true.

[34:18] I am not anti-religion.

[34:20] I am pro-existing and anti-death.

[34:24] Growing up, I was asked to bet my existence on a hypothesis that is testable only upon

[34:30] death.

[34:31] In any other time during the past few hundred thousand years, it wouldn't have mattered

[34:36] whether someone accepted the gamble or not.

[34:39] Everyone was guaranteed to die anyways, so why does it matter?

[34:43] But what if?

[34:44] What if?

[34:46] Then I read Zero: The Biography of a Dangerous Idea by Charles Seife.

[34:52] Nothing in my mind has been the same since.

[34:56] What a surprise it was to learn that the number zero was discovered.

[35:00] Of course it was.

[35:01] I just hadn't thought about it.

[35:03] Like the idea behind every great revolution, the concept of zero wasn't birthed into the

[35:07] world easily.

[35:09] It caused a stir in philosophy, math, ideology, and society before it revolutionized each.

[35:16] Zeroism is a way of thinking that has helped me in my mental efforts to seek out the undiscovered,

[35:21] what lies beyond the fog.

[35:24] Zeroism digs into what is hidden in plain sight, just as the number zero once was.

[35:31] Zeroism allows me to reach out and feel the intangible and amorphous, the kinds of ideas

[35:36] that have been transformative to civilizations.

[35:39] Heliocentrism, the discovery of germs, and Einstein's special and general theories of

[35:44] relativity.

[35:46] These are all zeros.

[35:47] They were not discovered based upon what we knew, but rather what we didn't know.

[35:53] Zeroism is a system of thought one level deeper than so-called first principles thinking.

[35:59] If zeroth-principle thinking is about the building blocks, first-principles thinking is all about understanding

[36:04] the nature of the building blocks.

[36:07] Many people spend their entire lives as first-principles thinkers.

[36:12] There's nothing wrong with this.

[36:14] Some professions demand it.

[36:17] Consider the fictional character Sherlock Holmes.

[36:20] He's a first-principles thinker.

[36:22] He believes, according to his famed maxim, that when you have eliminated the impossible,

[36:28] whatever remains, however improbable, must be the truth.

[36:34] Zeroism invites you to think like Dirk Gently, a different kind of detective conceived

[36:39] of by the sci-fi humorist Douglas Adams.

[36:43] Dirk Gently is a zero detective.

[36:47] The zero detective rules nothing out.

[36:50] Instead, they begin by wondering if the impossible solution makes more sense.

[36:55] It is a search for the building blocks themselves, not how they work.

[37:01] When someone is thinking from a first-principles perspective, they're likely going to start

[37:06] by assuming the fewest number of things within a given time frame.

[37:11] Zeroism, however, doesn't try to identify or rule out the impossible.

[37:17] It wonders if we might not be seeing the structure of reality immediately in front of us.

[37:24] Zeroism leads us to the possible, not the probable, to the previously undiscovered.

[37:31] Zeros are game changers.

[37:33] In a world where we are continuously expanding our spheres of understanding, each zeroth

[37:39] principle insight can potentially unlock a set of more expansive spheres.

[37:44] This is bigger than an exponential effect, a hockey stick curve.

[37:49] With a zero, the graph is not just exponential, the units change.

[37:54] The graph reorganizes its axes.

[37:58] New axes are added to accommodate ideas from another dimension.

[38:03] That's what zeros get you.

[38:05] Zeroth principle thinking paves the way to explore boundless terrain.

[38:11] We have entered an era that will be defined by an avalanche of zero-like ideas.

[38:17] That is because AI is a zero factory.

[38:21] Understanding our very existence has become a quest for zeros.

[38:26] Currently we are seeing zeroth principle ideas come to fruition at a faster rate than

[38:31] ever before, which means we're driving into this future with fog in all directions.

[38:38] Previously we humans could make reasonable assumptions about what might exist beyond

[38:43] our field of view.

[38:44] Deductive reasoning and a dash of first principle thinking every century or so used to be enough.

[38:51] Not anymore.

[38:53] Evolving, adapting, and collaborating with AI will require fluency in zeroism.

[39:00] To that end, I've been working on a system for evolving ourselves so that we humans can

[39:05] roll with the changes that AI brings, no matter how the terrain shifts.

[39:11] The goal is to align ourselves towards harmony instead of our own demise.

[39:16] So what is the potential payoff?

[39:18] Well, that's a surprise.

[39:21] To begin our journey as zeroists, we can seek to identify the building blocks of intelligence.

[39:28] Our bodies and minds are a fruitful place to begin this exploration.

[39:37] Episode 5, The First Supper

[39:42] The ideas around blueprint and zeroism challenge the core of our current identities.

[39:47] Many people upon hearing these ideas for new models of the future, either boomerang

[39:52] some form of hatred or fall into existential despair, unsure how to climb out of the pit

[39:58] dug by their previous selves.

[40:01] I've seen this happen so many times that I realized I needed to soften the blow, which

[40:06] led me to start hosting blueprint brunches, which I call The First Supper.

[40:12] In a private and comfortable setting, I present the ideas and then create enough space and

[40:16] time for the participants to have multiple existential crises.

[40:21] At the supper's end, the participants rise with a basic level of reconciliation.

[40:27] My favorite follow-up notes to receive are those that explain how the conversation broke

[40:32] their brain in ways they didn't know it could be broken.

[40:36] And then they add, they haven't been able to stop ruminating about the ideas since.

[40:41] I suspect that the more time the idea sits in your mind, the more its inevitability

[40:46] creeps in.

[40:48] It's hard to find arguments supporting the idea that we can remain as we are and survive.

[40:54] The evolutionary pressure presented by advances in AI, a changing biosphere and unforeseen

[41:00] and unpredictable future events invite, no, demand that we improve our speed of evolutionary

[41:06] adaptation.

[41:09] First Supper guests have included scientists, philosophers, engineers, educators, artists,

[41:15] astronauts, doctors, and entrepreneurs.

[41:18] Each gathering lasts around two to three hours.

[41:20] The meals all have the same arc.

[41:23] To prepare the group for the provocative nature of the conversation, I provide some thought

[41:27] experiments.

[41:29] I start with a simple question we all supposedly know the answer to.

[41:33] How many glasses of water should you drink each day?

[41:36] Eight, someone always says.

[41:39] How do you know that?

[41:40] I ask.

[41:41] I read it somewhere.

[41:42] Or maybe my parents told me.

[41:43] I don't know.

[41:45] This exercise highlights what most of us experience daily across the majority of the decisions

[41:49] we make about health and wellness.

[41:52] Our insides are off limits.

[41:54] We use hunches, folk remedies, and anecdotes.

[41:58] We act upon information that we feel we know without truly knowing if the answer is right

[42:03] for us.

[42:05] Is drinking eight glasses of water correct for each of us every day?

[42:09] How would we know the optimal amount at any given time?

[42:13] Perhaps if there were a way to measure and evaluate the effectiveness of our water drinking

[42:18] on our individual health, we could arrive at an evidence-based result.

[42:23] This process is the essence of Blueprint: to act with the precision of scientific

[42:28] evidence and data.

[42:31] Soon the food arrives.

[42:33] Everyone has a full Blueprint spread in front of them.

[42:36] The result of algorithmic design.

[42:39] Each guest is primed and ready to go on a wild ride of new ideas.

[42:44] Then we begin.

[42:46] I pose my next question.

[42:49] Imagine that you have access to an algorithm capable of generating the best physical and

[42:54] mental health of your life.

[42:57] An algorithm personalized to you that can take better care of you than you can take

[43:02] care of yourself.

[43:04] Opt in and the benefits are yours.

[43:07] The catch, what you eat, when you eat, and other health-related decisions will be determined

[43:13] by the algorithm.

[43:15] This algorithm has been built upon the best science available from peak performance, health,

[43:20] and medicine.

[43:22] This means that pantry grazing, spontaneous pizza parties, and junk food binging will

[43:27] no longer be options.

[43:29] Do you opt in or decline?

[43:32] Usually a rare few respond in the affirmative.

[43:35] Yes, please, anything to save me from myself.

[43:39] A third or so wants to make modifications to the deal.

[43:43] Yes, but I'd like to make sure I keep the following things.

[43:47] The rest, more than half, may give a direct and blunt no, or something like, this is creepy,

[43:53] is this a cult?

[43:55] The reasons for individual refusal are often tied to personal attachments to existing lifestyles,

[44:02] and to beliefs about life and death.

[44:04] Or to imagined conflicts that a fully implemented autonomous self would bring to cultural and

[44:11] social relationships.

[44:13] The thought experiment is of course oversimplified by design.

[44:18] To sincerely answer the question, we'd each need to ask hundreds more questions.

[44:23] My personal experience is that two years after saying yes to the thought experiment, I've

[44:29] never been happier, healthier, more emotionally stable, or as intellectually alive.

[44:35] In fact, I pity my former self, who was continuously tortured by his mind and unable to live his

[44:42] best life.

[44:44] It's understandable if this contemplative exercise invites an existential crisis of

[44:48] sorts right now.

[44:50] This is a big idea, as big as the earth not being the center of the universe.

[44:57] By the end of 2020, I had hired a team of doctors, set up a lab in my home, and become

[45:02] the most measured human in history.

[45:05] No more Evening Brian, no more late-night binges, no more just-this-once rationalizations,

[45:12] no more mid-flight stalls.

[45:15] Since I began Blueprint, I have come to appreciate the many problems my autonomous self has solved

[45:20] in my daily existence.

[45:22] I no longer spend any amount of time thinking about my next meal or when to go to bed.

[45:27] I no longer grapple with whether to do this naughty thing that will accelerate my speed

[45:32] of aging.

[45:33] My mind is now free to focus on more enduringly rewarding things, such as the future of intelligent

[45:40] existence.

[45:42] This first version of Blueprint and my autonomous self took years and millions of dollars to

[45:47] build.

[45:49] Most of the processes are still manual, clunky, and require a large team.

[45:54] It's impractical to imagine the current version being scaled throughout society.

[45:58] That's okay, because this is how innovation works.

[46:02] The first versions are expensive, manual, and buggy.

[46:06] The key is looking past the awkward things and finding the gem of inevitability that's

[46:11] been demonstrated.

[46:13] When the first telegraph message was sent, it was clear the Pony Express was riding into

[46:18] the sunset.

[46:20] When digital navigation via GPS appeared, paper maps on laps became a thing of

[46:25] the past.

[46:27] Blueprint has demonstrated a new approach to managing our self-destructive tendencies.

[46:33] A new approach to imagine our future selves.

[46:37] Can you imagine a future where we don't want to inflict the type of self-harm that

[46:42] hastens death?

[46:43] You're probably thinking that once your new system is set up, you'll sneak away and

[46:48] insert your favorite vice here.

[46:51] Just kidding, no I'm not.

[46:53] I'm betting that you won't want that vice anymore.

[46:56] Your life and reality will be so fundamentally changed that you'll look back on your former

[47:01] self as I do now with pity.

[47:05] Maybe I'm wrong.

[47:06] Maybe indulgences of these types we enjoy today will be a new form of entertainment.

[47:11] They'll just be simulations that make the real and perceived indistinguishable, all

[47:17] while having built-in mechanisms to protect us from any biological harm.

[47:22] I'm playing here to make a point.

[47:24] Our minds often foreclose on future possibilities, thinking that something is impossible or undesirable

[47:32] before we've even tried it.

[47:34] So let's think about this.

[47:36] What might be the future of our well-being?

[47:40] I have an idea.

[47:41] While building Braintree Venmo, the payments company I founded, we worked with a ride-share

[47:46] company to eliminate all payment friction associated with traveling somewhere.

[47:51] You hailed a car from the app, arrived at your destination, and then left.

[47:57] No pulling out your credit card, no janky machines, no awkwardness, no waiting in the

[48:02] car as the transaction took minutes to complete or fail.

[48:07] Payment and tip happened, like magic, all behind the scenes.

[48:11] Payment was forgotten.

[48:14] The future of our well-being might be similar.

[48:17] Magic happening behind the scenes, something that's forgotten and doesn't need to be

[48:22] attended to anymore.

[48:24] AI will be omnipresent.

[48:27] Data from our bodies will be streamed real-time.

[48:30] Systems outside and inside our bodies working in unison, personalized to our needs.

[48:37] Like the stock market, micro-corrections to our biological processes will happen on

[48:41] the time scale of milliseconds.

[48:44] We won't even notice.

[48:46] We will be too busy focused on the next games we've chosen to play with our superior abilities.

[48:53] Blueprint will be applied to the care of Earth, too.

[48:57] Millions of measurements taken from around the globe, science and data determining optimal

[49:03] conditions with protocols and therapies put into action.

[49:08] It seems so basic, so obvious.

[49:12] A desire to exist.

[49:15] No qualifiers, no conditionals, no pretending that any of us have the wisdom to see this

[49:22] next chapter of intelligent life and decide ahead of time whether the future is suitable

[49:29] or desirable for us.

[49:31] We choose life over death.

[49:34] Unconditionally.

[49:35] But we already do, you might contend.

[49:39] That's not what the data shows about our individual or collective behavior.

[49:43] The data demonstrates that we are suicidal, optimizing for temporary pleasure because

[49:49] we have concluded that death is inevitable and we might as well make the most of the

[49:54] time we have.

[49:56] Could a want for life be the most significant revolution to happen in the early 21st century?

[50:03] Sometimes we excitedly race into the future.

[50:07] Other times the future pulls us into her magnificence as we kick and scream.

[50:13] We take credit for seeing the future and forget that we ever opposed it.

[50:17] You can't go back and change the beginning, but you can start where you are and change

[50:22] the ending.

[50:24] Gen Zero, let's join with Zeroism as our blueprint and lead the charge for this want

[50:32] to exist into what will be perhaps the most marvelous existence to ever occur in this

[50:38] galaxy.

[50:44] Episode 6 Dear Gen Zero

[50:49] You may or may not be aware that I recently released a book, Don't Die.

[50:54] It's a fictionalized but mostly truthful account of the internal struggles my mind went through

[51:00] as it contemplated blueprint, autonomous selves, Zeroism, AI, and the continued survival of

[51:07] our species.

[51:08] I've been trying to write that book and this piece for 10 years.

[51:13] I could always feel the ideas somewhere in the back of my mind.

[51:16] Eventually the concepts formed and have now solidified into a state where they seem understandable,

[51:22] teachable, and actionable.

[51:25] Biographies have always been a personal lighthouse for me in trying to understand life.

[51:30] Instead of learning the broad strokes of history, philosophies, and technology where this or

[51:35] that thing happened in this or that year, biographies give a first-person account into

[51:41] the thoughts, emotions, context of their time and place.

[51:46] They provide models of thinking, problem solving, and persistence to emulate.

[51:51] When I read biographies, I focus on how the person was able to see something so clearly

[51:57] that was invisible to everyone else at the time.

[52:00] Often this is why many of history's most influential characters were written about

[52:05] hundreds of years after their deaths.

[52:08] It takes time for a society to form that clarity of perspective.

[52:13] As I record this, it's 2023.

[52:16] If I challenge myself to learn from the biographies of history's explorers and muster maximum

[52:21] sobriety trying to discern what is really going on right now, I'd start with three

[52:26] observations.

[52:29] Number one, artificial intelligence is improving faster than we can comprehend and in ways

[52:35] we cannot predict.

[52:37] Number two, the biosphere of our planet is in question.

[52:41] And three, we humans are dangerously at each other's throats.

[52:46] Obvious, right?

[52:48] So what do we do now?

[52:52] Is humanity's focus correctly and adequately trained on addressing these issues before

[52:58] they create catastrophe and possibly annihilation?

[53:02] I personally don't think so.

[53:05] Certainly not in proportion to the existential risk that each represents.

[53:10] Why?

[53:11] There are many possible answers one might hear to this question.

[53:15] What power does each of us have individually to change anything?

[53:20] Shouldn't governments be addressing problems like these?

[53:23] Other goals such as professional advancement, family or other personal matters are higher

[53:28] priority.

[53:29] What responsibility do we have to future generations?

[53:33] Why do we care to exist anyways?

[53:37] We're all going to die, so why does it matter?

[53:39] It doesn't matter what happens to humans or Earth.

[53:43] It's the next life that's worth living for.

[53:46] These common perspectives underscore the problem we humans have in aligning our goals.

[53:52] We want and value different things.

[53:55] We understand our lives in unique ways.

[53:58] We are fractured in our opinions and goals about what matters in life and why.

[54:04] We know from studying history that each era lives in its own bubble of norms and beliefs.

[54:11] Change is inevitable and future humans will consider us primitive relative to their superior

[54:17] ways, just as we do with previous generations.

[54:21] What is primitive about us that future generations will observe?

[54:26] What is potentially inevitable in the future that we may want to pull into our present?

[54:32] To help us with this thought experiment, let's time travel, visualizing people in the 25th

[54:38] century learning about and reflecting upon the early 21st century.

[54:43] That's us right now.

[54:45] They're marveling at how we figured out how to keep humanity and maybe ourselves alive

[54:50] amid the greatest existential risks faced by humans up to that point.

[54:56] What do they observe?

[54:58] My best guess is that the biographies available in the 25th century will conclude that up

[55:04] until the early 21st century, the human mind was the superior technology of intelligence

[55:10] on earth.

[55:11] The human mind, as long as it played within society's laws, was free to decide what,

[55:16] how, when, and why to do things.

[55:19] Then, in the mid-2020s, there was a swift societal change.

[55:25] The combination of artificial intelligence and the programmability of biology and chemistry

[55:30] proved that computational abilities were superior to the human mind for managing a wide variety

[55:37] of individual and societal systems, from personal health to biological ecosystems.

[55:43] Most importantly, a seedling of an idea sprouted in the late 2020s, becoming the biggest revolution

[55:51] in human history.

[55:53] Death transitioned from inevitable to don't die.

[55:58] I personally went through this transition from my mind having unquestioned authority

[56:03] and death being inevitable to opting into an algorithm to manage my health with the

[56:09] objective of not dying.

[56:12] Before Blueprint, my proclivities seemed hardwired to naturally be self-destructive.

[56:18] These self-destructive behaviors worsened until I was in the throes of an existential

[56:23] crisis.

[56:24] I had tried to stop my binge eating hundreds of times, but was powerless.

[56:30] It was clear, either I stopped my self-destructive behaviors or I was going to the grave early.

[56:36] I was choosing death each day and I couldn't stop myself.

[56:42] I was doing something to myself that I knew was bad for me.

[56:45] I knew that it made me feel terrible.

[56:48] So why did I continue to do it?

[56:50] Sound familiar?

[56:52] Why is that so easy to relate to?

[56:55] I was just like humanity at large, fracturing in my own opinions and wants in life, facing

[57:02] existential crisis daily, misaligning.

[57:07] After hundreds of failed attempts, I found a solution that worked for me and stopped

[57:11] my binge eating.

[57:13] The success of this strategy got me wondering if there were other systems that are even

[57:17] more trustworthy than my various selves to look after my best interests.

[57:23] Blueprint has proven trustworthy wherever my mind reliably fails me.

[57:28] Blueprint is an algorithm that takes better care of me than I do.

[57:32] Based upon hundreds of biological measurements, the algorithm has generated near-perfect health.

[57:38] It is a revolution of self within self.

[57:42] Blueprint considers that I am a collection of 35 trillion cells and aligns all of them

[57:49] towards a single objective, continued existence.

[57:54] For more than two years, my commitment to Blueprint has remained constant.

[57:57] I know after living with my mind for 45 years that it is a devious, cunning and ruthless

[58:04] rascal willing to do anything and tell any cheerful story to get what it wants, frequently

[58:11] to my detriment.

[58:13] This makes me wonder who is the enemy?

[58:16] Should we point our fingers outward or look within?

[58:21] For our entire lives, our minds have been our best and only tool to navigate existence.

[58:27] It's nearly unthinkable that we'd willingly opt into a new way of being that gives power

[58:33] to other systems of authority that are superior at looking after our best interests.

[58:39] This is the singular question of our time.

[58:43] Do we humans, with our nation states, corporations, ideologies and individual behaviors, believe

[58:50] that we can solve the existential crisis before catastrophe?

[58:55] Or do we need to opt into superior computational systems that will better manage our interests?

[59:03] Confronting new ideas often hurts.

[59:06] They challenge our identities and make us feel uncomfortable.

[59:10] Most of the time, we just want them to go away and will say, think and do anything to

[59:17] hasten their disappearance.

[59:20] When trying to solve difficult problems, sometimes we need to look for the new ideas in the dark,

[59:26] where it's potentially a little scary.

[59:29] If we are to succeed against the formidable existential threats we face, just how far

[59:34] and wide do we need to consider searching?

[59:37] I like starting at the opposite end of comfort, in the dark, where we haven't dared look,

[59:43] seeking out and embracing discomfort.

[59:46] Being open-minded is hard work.

[59:48] It's a skill of wrestling and negotiating with counter intuition.

[59:53] It might be the single most valuable character trait for any human as we step into this new

[59:58] future.

[59:59] If you're hearing this for the first time, I can imagine what you might be feeling and

[60:04] thinking.

[60:05] If you need to, pause this video and go for a walk.

[60:08] I understand.

[60:10] When you return, let's explore this new framework of open-mindedness and adaptability,

[60:15] zeroism.

[60:17] The only things on humanity's to-do list are, don't die, don't kill each other, don't

[60:24] destroy our biosphere, and align AI with don't die.

[60:30] Crisis has been a near permanent fixture of history, war, plagues, natural disasters,

[60:35] despotism, and more.

[60:37] A crisis-free world is out of reach for the foreseeable future, but we can learn from

[60:42] crisis by improving, iterating, and aligning.

[60:46] Alfred North Whitehead observed in 1911, civilization advances by extending the number of important operations which we can perform without thinking about them.

[60:55] We master this process as babies.

[61:00] We learn to crawl, then walk, and then run.

[61:04] Somewhere along that learning process, we no longer need to think about the movements.

[61:08] They just happen as a byproduct of pursuing some other goal.

[61:13] Societal scaffolding works in a similar way.

[61:16] When we buy a washer and dryer, we don't wonder whether it will fit through the front door.

[61:20] We just know they will without thinking about it.

[61:23] They were designed to fit easily through the average US doorway, after all.

[61:27] Traffic lights, green, yellow, and red, are timed to mirror human biological abilities

[61:32] to react.

[61:34] Their timings are not random, but constrained, designed around our physical limits.

[61:39] There are thousands of such building blocks that enable our societal advance from version

[61:43] 1, to version 2, to version 3.

[61:46] They are the invisible rebar of society's scaffolding, and are everywhere.

[61:52] Each generation builds in the context of their time.

[61:56] I love this quote from John Adams as he worked to establish the United States,

[62:01] I must study politics and war that my sons may have liberty to study mathematics and

[62:06] philosophy.

[62:08] The biggest question of our time is this, how do we marry societal advance, technological

[62:14] improvements, and our own evolutionary advance all in tandem?

[62:19] Early in 2017, I was in the Middle East with the leader of a country.

[62:24] He was telling me about his country's 2030 goals.

[62:27] I marveled at how one could plan 13 years in advance with the speed of technological

[62:32] change.

[62:33] He asked me how I'd think about the task.

[62:35] I replied by asking him to play a game with me.

[62:39] Say we have a robot that we're trying to get to the furthest sand dune on the horizon.

[62:44] We can take one of two approaches.

[62:46] First, we could take a topographical map of the dune and send the robot off.

[62:52] The problem with this approach is that the sands will shift, causing the terrain to change,

[62:57] quickly making our maps irrelevant.

[62:59] The robot will get stuck in the sand.

[63:02] The alternative is to engineer the robot with the tools it needs to navigate any possible

[63:07] terrain change.

[63:09] I call this future literacy.

[63:12] Of course, zeroists may even question the premise of a destination.

[63:17] Maybe the horizon isn't the goal.

[63:20] It's a relief that we don't need to know what the future will bring to be prepared,

[63:25] because we'll never know the future with certainty.

[63:28] It's reassuring that we can journey into the unknown with systems and tools that will

[63:33] allow us to respond and adapt quickly to any environment.

[63:37] Adapting, not knowing, is our new superpower.

[63:42] There is, however, another option, and it is the easiest.

[63:45] We could choose to put our heads in the sand instead of traversing it.

[63:50] Solved.

[63:51] The problem with this, of course, is that four billion years of evolution demonstrates

[63:55] that a biological species survives and thrives according to its rate of adaptation, not its

[64:01] rate of looking the other way.

[64:04] How will we adapt to an increasingly complex, ever-changing world full of unknowns?

[64:09] We might be inclined to imagine that optimal human adaptation needs to rely on complex

[64:14] systems, and that complex problems require complex solutions.

[64:19] However, our journey to adapt may be condensed to just a few simple rules.

[64:25] Flocks of birds and schools of fish self-organize into sophisticated emergent patterns using

[64:31] simple rules.

[64:33] Flying insects do the same to find their food.

[64:36] Under the hood, some of the most advanced systems of intelligence, including AI, are

[64:41] often just efficient ways of searching through options from simple rules.

[64:46] Miraculous, robust behaviors and systems can emerge on the other side of simplicity.

[64:53] Said another way, the speed and unpredictability of this early 21st century is the most dynamic,

[65:01] unknowable challenge we've ever faced.

[65:08] The mammalian mind is adaptable.

[65:12] It learns from the past and models the future.

[65:15] It is also full of quirks, biases, and tendencies towards self-destructive behaviors that stifle

[65:22] our improvement.

[65:24] We can use our own simple set of rules to guide our cognitive scaffolding and up-level

[65:29] our cognition to capabilities previously unimagined.

[65:34] Louis Pasteur once remarked,

[65:36] Fortune favors the prepared mind.

[65:39] I offer a revision in this context.

[65:42] Fortune will favor the adaptable mind.

[65:45] To adapt, we must align our goals and systems at every level, from the philosophical to

[65:50] the atomic.

[65:52] What will we need to adapt to?

[65:54] Only the most significant event in all of history, AI.

[65:59] No previous generation has been able to contemplate a future of unlimited potential the way we

[66:04] can today.

[66:06] For all of history, death has been inevitable.

[66:09] One day, we are born.

[66:11] Another day, hopefully a long time later, we die.

[66:14] That may not be the equation anymore.

[66:17] We simply do not know what the future holds.

[66:21] There are many differing opinions on AI.

[66:23] Some people are convinced that humans are inevitably doomed.

[66:26] Others believe AI will save us from ourselves.

[66:29] No one has been able to reliably predict the rate and contours of AI progress.

[66:35] That won't change.

[66:37] What we do know is that AI is improving faster than we can comprehend and in ways that we

[66:42] cannot predict.

[66:43] With the current world order, there is no way to stop or even slow its development.

[66:49] Nation states, corporations, ideological groups, individuals, and everyone in between will

[66:55] unabashedly use AI for their self-interested objectives.

[66:59] Time traveling again to the 25th century and looking back, what might we observe?

[67:05] In the early 21st century, Gen Zero, a multi-ethnic, multinational, multigenerational uprising

[67:12] organized around zeroism, rose to power and built global systems of goal alignment within

[67:19] self, between humans, with Earth, and with AI.

[67:23] Back in today's world, we can ask, when do we need to begin working earnestly on

[67:28] this endeavor to align all of intelligence in a small corner of the galaxy?

[67:33] AI is a kind of time traveler, or as our car mirrors remind us, objects in mirror are closer

[67:40] than they appear.

[67:42] The human mind struggles to understand exponential phenomena as we careen into the future.

[67:49] This can create dangerous blind spots and lack of preparation.

[67:54] Wisdom would have us act now.

[67:56] If we delay, we'd load ourselves with additional existential risk, putting off for later what

[68:02] needs to be done and can be done now.

[68:05] The major problem is that we tend to be reactionary and not proactive.

[68:11] For the most part, humans only engage in meaningful change after disaster strikes.

[68:17] It's one of the failings of the human mind and a scenario that may be unforgiving to

[68:22] our survival.

[68:23] What if one day there is a disaster so large we cannot react after it happens because we

[68:29] are no longer around?

[68:31] 25th century wisdom invites us to achieve levels of goal alignment within self, between

[68:37] ourselves and AI, and between ourselves and Earth that are unthinkable to us right now.

[68:42] To do this, we need to act on presumed time scales that will certainly make us uncomfortable.

[68:49] Whatever opinion one has about AI, the precarious instability of our biosphere requires that

[68:54] we do act now.

[68:57] Alignment with Earth includes many of the same fundamental processes as aligning with

[69:02] AI and many of the same processes as aligning within self.

[69:07] Today, my daily routine is trialing a version of human-AI alignment.

[69:12] The past two years of Blueprint required that I goal-align my 35 trillion cells around

[69:17] continued existence.

[69:19] The first thing I did was label any behavior that increased my speed of aging as an act

[69:24] of violence.

[69:26] Extensively measuring my body and capturing thousands of data points allowed me to begin

[69:32] identifying which foods, behaviors, and lifestyle choices slowed my speed of aging and those

[69:38] which increased it.

[69:40] A heuristic I applied is simple.

[69:42] Remove all things that increase the speed of aging and implement things that slow my

[69:47] speed of aging.

[69:49] So how did I do?

[69:51] In a group of more than 2,000 anti-aging athletes longitudinally measuring their speed of aging

[69:58] using a state-of-the-art DNA methylation algorithm, I ranked number one in the world for greatest

[70:04] reduction in the velocity of aging.

[70:07] I demonstrated that an individual could reconfigure their systems of decision-making and authority.

[70:13] I tamed my devious and conniving mind, eliminated all self-destructive behaviors, and achieved

[70:21] zero violence within self by empowering other systems, in this case my individual organ

[70:27] systems, with authority.

[70:31] I did this to try and demonstrate a beta version of a future human.

[70:36] Human Intelligent Being Trying to Self-Align

[70:40] Building AI systems that align with continued human existence is a discipline in itself

[70:45] and deserving of the greatest minds of our generation.

[70:48] For anyone not directly working on AI alignment, you can contribute by knowing that aligning

[70:54] all intelligence is the central opportunity of our time.

[70:58] It can begin with self.

[71:01] Every second of every day, my mind is searching for the most important piece of information

[71:07] available.

[71:08] What is the highest value use of my time?

[71:12] What is the most piercing insight at this moment?

[71:16] What can't I see, but if I could, would change the way I understand reality?

[71:21] A constant search for E = mc².

[71:25] The task that I've given myself is to write you, dear listener, a letter that prepares

[71:30] you for the future, to provide you the best wisdom that I'm capable of communicating.

[71:36] This is the best I've got right now, which, as you know, will change in one hour's time.

[71:43] This is what I know.

[71:45] The future is not going to be like the past.

[71:48] Every generation before you had the advantage of being able to learn from the past and model

[71:53] out their lives, the way they thought about individuals, countries, currencies, and relationships,

[71:59] about ambition, regret, happiness, and sadness.

[72:02] The wise would diligently study to identify what they didn't need to relearn and then

[72:08] with extra time set off to discover something new.

[72:11] The unwise perceived their novelty of experience as original when it was in fact predictable,

[72:18] limiting their time and ability to discover.

[72:21] For the first time, we cannot look into the past to accurately model the future.

[72:27] Some fundamental underpinnings of history as we know it may carry over.

[72:32] However, the zeroth nature of computational intelligence, AI, steals away our ability to

[72:39] use the past to make any predictions about the future.

[72:44] We cannot know which historical or present phenomena will persist and what will defy

[72:51] our expectations.

[72:53] Be aware and cautious about what assumptions you make about the future.

[72:57] Try to isolate each assumption and examine it carefully.

[73:02] Simply being aware of the scaffolding you're standing on will enable you to react faster

[73:08] and with improved clearheadedness when you see other patterns emerge in the world.

[73:13] If you can make this a staple in how you process information, you'll be able to sift through

[73:19] arguments, trends, and norms, and keep your mind in sync with technological advance.

[73:24] You won't live in the past.

[73:27] Here are three simple questions you can ask to enhance your predictions when you're trying

[73:32] to make life decisions.

[73:34] First, what must remain true for this to continue to be true?

[73:40] Second, what new thing would make this untrue?

[73:45] And third, what wildly unexpected surprise would change the question?

[73:51] It used to be that society and science advanced one funeral at a time.

[73:57] Now society advances at the speed at which it rejuvenates.

[74:06] Thank you for listening.

[74:07] If you liked this, check out my other books, Don't Die and We the People.

[74:12] Wishing each of you all the best.